Hill Climbing in Recurrent Neural Networks for Learning the a^n b^n c^n Language
Authors
Abstract
A simple recurrent neural network is trained on a one-step look-ahead prediction task for symbol sequences of the context-sensitive a^n b^n c^n language. Using an evolutionary hill climbing strategy for incremental learning, the network learns to predict sequences of strings up to depth n = 12. The experiments and the algorithms used are described. The activation of the hidden units of the trained network is displayed in a 3-D graph and analysed.
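The abstract gives no implementation details, but the general scheme it describes can be illustrated with a short sketch. The following is a hypothetical reconstruction, not the authors' code: a small simple recurrent network (SRN) with one-hot symbol coding, random-mutation hill climbing that keeps a candidate only when the summed squared prediction error does not increase, and an incremental schedule over string depths. The hidden size, mutation scale, and step counts are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of evolutionary hill
# climbing on a simple recurrent network for one-step prediction of
# a^n b^n c^n strings. All hyperparameters below are illustrative guesses.
import numpy as np

rng = np.random.default_rng(0)
SYMBOLS = "Sabc"                      # 'S' marks the start of a string
IDX = {s: i for i, s in enumerate(SYMBOLS)}
H = 5                                 # hidden units (assumed)

def make_string(n):
    return "S" + "a" * n + "b" * n + "c" * n

def one_hot(sym):
    v = np.zeros(len(SYMBOLS))
    v[IDX[sym]] = 1.0
    return v

def init_weights():
    return {"Wxh": rng.normal(0, 0.5, (H, len(SYMBOLS))),
            "Whh": rng.normal(0, 0.5, (H, H)),
            "Why": rng.normal(0, 0.5, (len(SYMBOLS), H))}

def prediction_error(w, n):
    """Summed squared one-step prediction error over one a^n b^n c^n string."""
    s, h, err = make_string(n), np.zeros(H), 0.0
    for t in range(len(s) - 1):
        h = np.tanh(w["Wxh"] @ one_hot(s[t]) + w["Whh"] @ h)   # SRN update
        err += np.sum((w["Why"] @ h - one_hot(s[t + 1])) ** 2)
    return err

def mutate(w, scale=0.05):
    return {k: v + rng.normal(0, scale, v.shape) for k, v in w.items()}

# Incremental learning: train on depths 1..n before moving on to n + 1,
# accepting a mutated candidate only if the error does not increase.
w = init_weights()
for depth in range(1, 13):
    best = sum(prediction_error(w, n) for n in range(1, depth + 1))
    for _ in range(1000):
        cand = mutate(w)
        err = sum(prediction_error(cand, n) for n in range(1, depth + 1))
        if err <= best:
            w, best = cand, err
    print(f"depth {depth}: error {best:.3f}")
```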
Similar resources
Incremental training of first order recurrent neural networks to predict a context-sensitive language
In recent years it has been shown that first order recurrent neural networks trained by gradient descent can learn not only regular but also simple context-free and context-sensitive languages. However, the success rate was generally low and severe instability issues were encountered. The present study examines the hypothesis that a combination of evolutionary hill climbing with incremental lea...
Hill Climbing in Recurrent Neural Networks for Learning the a^n b^n c^n Language
A simple recurrent neural network is trained on a one-step look-ahead prediction task for symbol sequences of the context-sensitive a^n b^n c^n language. Using an evolutionary hill climbing strategy for incremental learning, the network learns to predict sequences of strings up to depth n = 12. Experiments and the algorithms used are described. The activation of the hidden units of the trained network i...
Comparison of Genetic and Hill Climbing Algorithms to Improve an Artificial Neural Networks Model for Water Consumption Prediction
No unique method has so far been specified for determining the number of neurons in the hidden layers of Multi-Layer Perceptron (MLP) neural networks used for prediction. The present research is intended to optimize the number of neurons using two meta-heuristic procedures, namely genetic and hill-climbing algorithms (the latter sketched below). The data used in the present research for prediction are consumption data of water...
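As a rough illustration of the hill-climbing half of that comparison, the sketch below searches over the hidden-layer size of an MLP, accepting a local move only when validation error decreases. It assumes scikit-learn's MLPRegressor and uses synthetic data in place of the paper's water-consumption series.

```python
# Hypothetical sketch: hill climbing over the number of hidden neurons,
# with synthetic data standing in for the paper's water-consumption data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (400, 3))
y = np.sin(X @ np.array([1.0, 2.0, -1.0])) + 0.1 * rng.normal(size=400)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def val_mse(n_hidden):
    """Validation error of an MLP with one hidden layer of n_hidden units."""
    model = MLPRegressor(hidden_layer_sizes=(int(n_hidden),),
                         max_iter=2000, random_state=0).fit(X_tr, y_tr)
    return np.mean((model.predict(X_va) - y_va) ** 2)

n, best = 5, val_mse(5)                  # initial guess for the layer size
for _ in range(20):
    cand = max(1, n + int(rng.choice([-2, -1, 1, 2])))   # local move
    err = val_mse(cand)
    if err < best:                       # keep the move only if it improves
        n, best = cand, err
print(f"selected hidden units: {n} (validation MSE {best:.4f})")
```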
Fractal Learning Neural Networks
One of the fundamental challenges to using recurrent neural networks (RNNs) for learning natural languages is the difficulty they have with learning complex temporal dependencies in symbol sequences. Recently, substantial headway has been made in training RNNs to process languages which can be handled by the use of one or more symbol-counting stacks (e.g., a^n b^n, a^n A^m B^m b^n, a^n b^n c^n) with compelling...
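The "symbol-counting" remark can be made concrete with a toy recognizer: each of the listed languages can be decided by tracking symbol counts rather than by full context-sensitive machinery. The function below is an illustration of that idea, not anything from the paper; it checks membership in a^n b^n c^n.

```python
# Toy illustration of the counting idea: a^n b^n c^n membership can be
# decided with per-symbol counters plus a check on the block order.
def is_anbncn(s: str) -> bool:
    counts = {"a": 0, "b": 0, "c": 0}
    order = ""
    for ch in s:
        if ch not in counts:
            return False
        if not order or order[-1] != ch:   # record each new block of symbols
            order += ch
        counts[ch] += 1
    return order == "abc" and counts["a"] == counts["b"] == counts["c"] > 0

assert is_anbncn("aabbcc")
assert not is_anbncn("aabcc") and not is_anbncn("abcabc")
```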
A Fast Re-scoring Strategy to Capture Long-Distance Dependencies
A re-scoring strategy is proposed that makes it feasible to capture more long-distance dependencies in natural language. Two-pass strategies have become popular in a number of recognition tasks such as ASR (automatic speech recognition), MT (machine translation) and OCR (optical character recognition). The first pass typically applies a weak language model (n-grams) to a lattice and the sec...
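The two-pass idea can be summarized in a few lines. In the hypothetical sketch below, placeholder scoring functions stand in for a first-pass n-gram model and a stronger second-pass model; only re-scoring of an n-best list is shown, not lattice processing.

```python
# Hypothetical sketch of two-pass re-scoring over an n-best list; both
# scoring functions are placeholders, not real language models.
def first_pass_score(hyp: str) -> float:
    return -0.5 * len(hyp.split())        # stand-in for a weak n-gram model

def second_pass_score(hyp: str) -> float:
    # stand-in for a long-span model rewarding a long-distance dependency
    words = hyp.split()
    return 1.0 if words and words[0] == "the" and words[-1] == "barked" else 0.0

nbest = ["the dog that chased the cat barked",
         "the dog that chased the cat bark",
         "a dog that chased the cat barked"]
best = max(nbest, key=lambda h: first_pass_score(h) + second_pass_score(h))
print(best)
```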
Journal:
Volume Issue
Pages -
Publication date: 2007